Insurance fraud detection has recently assumed immense significance owing to the huge financial and reputational losses that fraud entails and the phenomenal success of machine-learning-based detection techniques. Insurance is broadly divided into two categories: (i) life and (ii) non-life. Non-life insurance in turn includes health insurance and auto insurance, among others. In either category, fraud detection techniques should be designed to capture as many fraudulent transactions as possible. Owing to the rarity of fraudulent transactions, in this paper we propose a chaotic variational autoencoder (C-VAE) to perform one-class classification (OCC) on genuine transactions. We employ the logistic chaotic map to generate the random noise in the latent space. The effectiveness of C-VAE is demonstrated on a health insurance fraud dataset and an auto insurance dataset, with the vanilla variational autoencoder (VAE) as the baseline. C-VAE outperformed VAE on both datasets, achieving classification rates of 77.9% and 87.25% on the health and automobile insurance datasets, respectively. Further, a t-test conducted at the 1% level of significance with 18 degrees of freedom indicates that C-VAE's improvement over VAE is statistically significant.
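As a rough sketch of how a logistic chaotic map could supply latent-space noise in a VAE's reparameterization step, the PyTorch snippet below replaces the usual Gaussian draw with a centered logistic-map sequence. The layer sizes, the map parameter r = 3.99, the seed x0, and the centering are illustrative assumptions, not the paper's exact configuration.

```python
import math
import torch
import torch.nn as nn

def logistic_map_noise(shape, r=3.99, x0=0.37):
    """Noise from the logistic map x_{n+1} = r * x_n * (1 - x_n), shifted to be roughly zero-mean."""
    n = math.prod(shape)
    xs = torch.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs - 0.5).reshape(shape)

class ChaoticVAE(nn.Module):
    """Toy C-VAE-style model for tabular transactions; layer sizes are assumptions."""
    def __init__(self, in_dim=30, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU())
        self.mu = nn.Linear(16, latent_dim)
        self.logvar = nn.Linear(16, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        eps = logistic_map_noise(mu.shape)        # chaotic noise in place of N(0, I)
        z = mu + torch.exp(0.5 * logvar) * eps    # reparameterization trick
        return self.dec(z), mu, logvar
```

In a one-class setting, such a model would be trained on genuine transactions only, and inputs with unusually high reconstruction error would be flagged as potentially fraudulent.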
Applications such as employees sharing office spaces over a workweek can be modeled as problems where agents are matched to resources over multiple rounds. Agents' requirements limit the set of compatible resources and the rounds in which they want to be matched. Viewing such an application as a multi-round matching problem on a bipartite compatibility graph between agents and resources, we show that a solution (i.e., a set of matchings, with one matching per round) can be found efficiently if one exists. To cope with situations where a solution does not exist, we consider two extensions. In the first extension, a benefit function is defined for each agent and the objective is to find a multi-round matching to maximize the total benefit. For a general class of benefit functions satisfying certain properties (including diminishing returns), we show that this multi-round matching problem is efficiently solvable. This class includes utilitarian and Rawlsian welfare functions. For another benefit function, we show that the maximization problem is NP-hard. In the second extension, the objective is to generate advice to each agent (i.e., a subset of requirements to be relaxed) subject to a budget constraint so that the agent can be matched. We show that this budget-constrained advice generation problem is NP-hard. For this problem, we develop an integer linear programming formulation as well as a heuristic based on local search. We experimentally evaluate our algorithms on synthetic networks and apply them to two real-world situations: shared office spaces and matching courses to classrooms.
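For intuition only, here is a simplified round-by-round view of the problem using NetworkX: for each round, compute a maximum bipartite matching among the agents that requested that round. This greedy per-round sketch is not the paper's algorithm and need not find a feasible multi-round solution even when one exists; the instance data below is made up.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Agents' requirements: compatible resources and the rounds in which they want a match.
agents = {
    "a1": {"resources": {"r1", "r2"}, "rounds": {1, 2}},
    "a2": {"resources": {"r2"},       "rounds": {1}},
    "a3": {"resources": {"r1"},       "rounds": {2, 3}},
}
rounds = [1, 2, 3]

schedule = {}
for t in rounds:
    G = nx.Graph()
    active = [a for a, req in agents.items() if t in req["rounds"]]
    G.add_nodes_from(active, bipartite=0)
    for a in active:
        for r in agents[a]["resources"]:
            G.add_edge(a, r)
    # Maximum matching for this round (Hopcroft-Karp); the result maps both directions.
    matching = bipartite.hopcroft_karp_matching(G, top_nodes=active)
    schedule[t] = {a: matching[a] for a in active if a in matching}

print(schedule)
```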
Explainable AI (XAI) is an important developing area but remains relatively understudied for clustering. We propose an explainable-by-design partitional clustering approach that not only finds clusters but also explains each cluster. The use of exemplars to support understanding is grounded in the exemplar-based concept-learning literature in psychology. We show that finding a small set of exemplars to explain even a single cluster is computationally intractable; hence, the overall problem is challenging. We develop an approximation algorithm that provides provable performance guarantees on both clustering quality and the number of exemplars used. This basic algorithm explains all instances in every cluster, while another approximation algorithm uses a bounded number of exemplars to allow simpler explanations and provably covers a large fraction of all instances. Experimental results show that our work is useful in domains involving hard-to-understand deep embeddings of images and text.
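As a loose illustration of exemplar-based explanation (not the paper's approximation algorithm or its guarantees), a greedy set-cover-style heuristic can pick exemplars for a single cluster: repeatedly choose the member that covers the most still-uncovered members within a radius eps. The Euclidean metric, the radius, and the synthetic embedding are assumptions.

```python
import numpy as np

def greedy_exemplars(cluster_points, eps=0.5):
    """Greedy heuristic: choose exemplars so every point lies within eps of some exemplar.

    A set-cover-style sketch, not the paper's algorithm."""
    pts = np.asarray(cluster_points)
    uncovered = set(range(len(pts)))
    exemplars = []
    while uncovered:
        # Pick the point covering the most uncovered points within radius eps.
        best, best_cov = None, set()
        for i in uncovered:
            cov = {j for j in uncovered if np.linalg.norm(pts[i] - pts[j]) <= eps}
            if len(cov) > len(best_cov):
                best, best_cov = i, cov
        exemplars.append(best)
        uncovered -= best_cov
    return exemplars

rng = np.random.default_rng(0)
cluster = rng.normal(size=(40, 2))          # stand-in for one cluster's embeddings
print(greedy_exemplars(cluster, eps=1.0))   # indices of the chosen exemplars
```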
Many scenarios in which agents with restrictions compete for resources can be cast as maximum matching problems on bipartite graphs. Our focus is on resource allocation problems where agents may have restrictions that make them incompatible with some resources. We assume that a principal chooses a maximum matching randomly, so that each agent is matched to a resource with some probability. Agents would like to improve their chances of being matched by modifying their restrictions within certain limits. The principal's goal is to advise an unsatisfied agent which restrictions to relax so that the total cost of relaxation stays within a budget (chosen by the agent) and the probability of being assigned a resource is maximized. We establish hardness results for some variants of this budget-constrained maximization problem and present algorithmic results for other variants. We experimentally evaluate our methods on synthetic datasets as well as two novel real-world datasets: a vacation activities dataset and a classrooms dataset.
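To make the advice-generation setting concrete, here is a brute-force sketch on a tiny instance: it searches for the cheapest budget-feasible set of relaxations after which the agent is matched in every maximum matching (i.e., with probability 1). This targets only that special case and is not the paper's probability-maximizing algorithm; the instance, costs, and NetworkX-based matching-number check are assumptions.

```python
from itertools import combinations
import networkx as nx

def matching_size(edges):
    """Cardinality of a maximum matching of the graph induced by `edges`."""
    G = nx.Graph()
    G.add_edges_from(edges)
    return len(nx.max_weight_matching(G, maxcardinality=True))

def cheapest_advice(base_edges, relaxable, budget, agent):
    """Brute-force sketch: cheapest budget-feasible relaxation set after which `agent`
    belongs to every maximum matching. `relaxable` maps candidate (agent, resource)
    edges to relaxation costs."""
    best = None
    items = list(relaxable.items())
    for k in range(len(items) + 1):
        for combo in combinations(items, k):
            cost = sum(c for _, c in combo)
            if cost > budget:
                continue
            edges = base_edges + [e for e, _ in combo]
            # Agent is in every maximum matching iff removing it lowers the matching number.
            full = matching_size(edges)
            without = matching_size([e for e in edges if agent not in e])
            if without < full and (best is None or cost < best[1]):
                best = ([e for e, _ in combo], cost)
    return best

base = [("a2", "r1"), ("a3", "r2")]                    # current compatibilities
relaxable = {("a1", "r1"): 2, ("a1", "r3"): 1}          # relaxations available to a1, with costs
print(cheapest_advice(base, relaxable, budget=3, agent="a1"))
```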
Deep learning (DL) based downscaling has become a popular tool in the earth sciences. Increasingly, DL approaches are being adopted to downscale coarse precipitation data and produce more accurate and reliable estimates at local (~few km or even smaller) scales. Although several studies have carried out dynamical or statistical downscaling of precipitation, their accuracy is limited by the availability of ground truth. A key challenge in measuring the accuracy of such methods is comparing the downscaled data with point-scale observations, which are often unavailable at such small scales. In this work, we perform DL-based downscaling to estimate local precipitation from India Meteorological Department (IMD) data, which was created by approximating values from station locations to grid points. To test the efficacy of different DL approaches, we employ four downscaling methods and evaluate their performance. The methods considered are (i) Deep Statistical Downscaling (DeepSD), (ii) augmented convolutional long short-term memory (ConvLSTM), (iii) a fully convolutional network (U-NET), and (iv) a super-resolution generative adversarial network (SR-GAN). The custom VGG network used in the SR-GAN was developed in this work using precipitation data. The results indicate that SR-GAN is the best method for precipitation downscaling. The downscaled data are validated against precipitation values at IMD stations. This DL approach offers a promising alternative to statistical downscaling.
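For a flavor of super-resolution-style downscaling, the sketch below maps a coarse precipitation grid to a 4x finer one with a small PixelShuffle CNN. It is not DeepSD, ConvLSTM, U-NET, or the paper's SR-GAN; the scale factor, channel counts, and grid sizes are placeholders.

```python
import torch
import torch.nn as nn

class TinyDownscaler(nn.Module):
    """Maps a coarse precipitation grid to a 4x finer grid; purely illustrative."""
    def __init__(self, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),   # rearranges channels into a finer spatial grid
        )

    def forward(self, coarse):
        return self.net(coarse)

model = TinyDownscaler(scale=4)
coarse = torch.rand(8, 1, 32, 32)     # batch of coarse precipitation grids
fine = model(coarse)                  # -> (8, 1, 128, 128)
print(fine.shape)
```

Training such a network would regress its output against fine-resolution IMD grids (e.g., with an MSE loss); an SR-GAN setup additionally couples this with adversarial and VGG-feature losses, as the custom VGG network mentioned above suggests.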
This survey focuses on current problems in earth system science to which machine learning algorithms can be applied. It provides an overview of previous work, ongoing work at the Ministry of Earth Sciences, Government of India, and future applications of ML algorithms to some significant earth science problems. We provide a comparison with related surveys, a mind map of the multi-dimensional areas related to machine learning, and a Gartner hype cycle for machine learning in earth system science (ESS). We mainly focus on the critical components of earth science, including the atmosphere, ocean, seismology, and biosphere, and cover AI/ML applications to statistical detection and forecasting problems.
The success of deep learning has been attributed to training large, overparameterized models on massive amounts of data. As this trend continues, model training has become prohibitively costly, requiring access to powerful computing systems to train state-of-the-art networks. A large body of research has been devoted to addressing the cost per iteration of training through various model compression techniques such as pruning and quantization. Less effort has been spent targeting the number of iterations. Previous work, such as forgetting scores and GraNd/EL2N scores, addresses this problem by identifying important samples within the full dataset and pruning the remaining samples, thereby reducing the iterations per epoch. Although these methods decrease training time, they use expensive static scoring algorithms prior to training, and when the scoring mechanism is accounted for, the total run time often increases. In this work, we address this shortcoming with dynamic data pruning algorithms. Surprisingly, we find that uniform random dynamic pruning can outperform the prior work at aggressive pruning rates. We attribute this to the existence of "sometimes" samples: points that are important to the learned decision boundary only during part of training. To better exploit the subtlety of sometimes samples, we propose two algorithms, based on reinforcement learning techniques, that dynamically prune samples and achieve higher accuracy than the random dynamic method. We test all our methods against a full-dataset baseline and the prior work on CIFAR-10 and CIFAR-100, and we can reduce training time by up to 2x without significant performance loss. Our results suggest that data pruning should be understood as a dynamic process closely tied to a model's training trajectory, rather than a static step based solely on the dataset.
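A minimal sketch of the uniform random dynamic pruning baseline described above: each epoch, a fresh random subset of the training indices is drawn, so the kept set changes over training instead of being fixed by a static score. The dataset, keep ratio, and loop body are placeholders.

```python
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# Stand-in dataset; in practice this would be CIFAR-10/100.
full_ds = TensorDataset(torch.randn(1000, 3, 32, 32), torch.randint(0, 10, (1000,)))
keep_ratio = 0.3                                   # aggressive pruning rate

for epoch in range(5):
    # Dynamic pruning: resample the kept subset every epoch, uniformly at random.
    n_keep = int(keep_ratio * len(full_ds))
    kept_idx = torch.randperm(len(full_ds))[:n_keep]
    loader = DataLoader(Subset(full_ds, kept_idx.tolist()), batch_size=64, shuffle=True)
    for images, labels in loader:
        pass  # forward/backward pass on the pruned subset would go here
```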
This paper explores the environmental impact of the super-linear growth trends of AI from a holistic perspective, spanning data, algorithms, and system hardware. We characterize the carbon footprint of AI computing by examining the model development cycle across industry-scale machine learning use cases while taking into account the life cycle of system hardware. Taking a step further, we capture the operational and manufacturing carbon footprint of AI computing and present an end-to-end analysis of how hardware-software design and at-scale optimization can help reduce the overall carbon footprint of AI. Based on industry experience and lessons learned, we share key challenges and chart out important development directions across the many dimensions of AI. We hope that the key messages and insights presented in this paper can inspire the community to advance the field of AI in an environmentally responsible manner.
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g., statistical power) but substantially better in earning (e.g., direct benefits). This illustrates the potential of GI-based allocation designs to improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
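The GI modification for exponential rewards is not reproduced here. As a hedged stand-in, the sketch below simulates a 2-armed adaptive design with exponentially distributed rewards using Thompson sampling under a conjugate Gamma prior on each arm's rate, which is one standard way to balance learning and earning in this setting; all priors and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rates = np.array([1.0, 0.5])   # arm means are 1/rate, so arm 1 is better (mean 2.0)
alpha = np.ones(2)                  # Gamma prior shape per arm
beta = np.ones(2)                   # Gamma prior rate per arm
pulls = np.zeros(2, dtype=int)
total_reward = 0.0

for t in range(500):
    sampled_rates = rng.gamma(shape=alpha, scale=1.0 / beta)  # posterior draw of each arm's rate
    arm = int(np.argmin(sampled_rates))                       # smaller rate => larger expected reward
    reward = rng.exponential(scale=1.0 / true_rates[arm])
    # Conjugate Gamma update for an exponential likelihood.
    alpha[arm] += 1
    beta[arm] += reward
    pulls[arm] += 1
    total_reward += reward

print(pulls, total_reward / 500)
```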
Quadruped robots are currently used in industrial robotics as mechanical aids to automate several routine tasks. However, the use of such a robot in a domestic setting is still very much an open research question. This paper discusses the design and virtual simulation of such a robot, capable of detecting and understanding human emotions, generating its gait, and responding via sounds and expressions on a screen. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate through various terrains, detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish a framework for simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio responses. Emotion detection from speech was not as performant as ERANNs or Zeta Policy learning, but still achieved an accuracy of 63.5%. The video emotion detection system produced results almost on par with the state of the art, with an accuracy of 99.66%. Due to its on-policy learning process, the PPO algorithm learned extremely rapidly, allowing the simulated dog to demonstrate a remarkably seamless gait across different cadences and variations. This enabled the quadruped robot to respond to the generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
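As a generic illustration of the on-policy PPO training loop referenced above (not the authors' quadruped simulation, gait rewards, or emotion pipeline), here is a Stable-Baselines3 sketch on a placeholder Gymnasium environment; the choice of library and environment is an assumption.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Placeholder continuous-control task; the paper's simulated quadruped and its
# gait/emotion rewards are not reproduced here.
env = gym.make("Pendulum-v1")

model = PPO("MlpPolicy", env, verbose=1)    # on-policy: collects fresh rollouts each update
model.learn(total_timesteps=50_000)

obs, info = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```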